22 research outputs found

    The SMART-I²: A new approach for the design of immersive audio-visual environments

    The SMART-I² aims at creating a precise and coherent virtual environment by providing users with accurate localization cues in both the audio and visual modalities. Wave field synthesis, for audio rendering, and tracked passive stereoscopy, for visual rendering, each permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches by using large multi-actuator panels both as loudspeaker arrays and as projection screens, providing a more perceptually consistent rendering.
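
    As a rough illustration of what wave field synthesis involves (independent of the SMART-I² implementation), the sketch below computes per-loudspeaker delays and gains for a virtual point source behind a linear array; it omits the WFS pre-equalization filter and array tapering, and all geometry values are invented.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def wfs_delays_gains(source_pos, speaker_positions):
    """Simplified WFS-style driving parameters for a virtual point source
    behind the loudspeaker array: each secondary source is fed a delayed,
    attenuated copy of the source signal (amplitude ~ 1/sqrt(distance))."""
    source_pos = np.asarray(source_pos, dtype=float)
    speaker_positions = np.asarray(speaker_positions, dtype=float)
    r = np.linalg.norm(speaker_positions - source_pos, axis=1)  # distances (m)
    delays = r / SPEED_OF_SOUND                 # propagation delay per speaker (s)
    gains = 1.0 / np.sqrt(np.maximum(r, 1e-3))  # distance attenuation
    return delays, gains

# Example: 8 speakers spaced 20 cm apart along the x-axis (y = 0),
# virtual source 1.5 m behind the array and slightly off-centre.
speakers = np.stack([np.arange(8) * 0.2, np.zeros(8)], axis=1)
delays, gains = wfs_delays_gains(source_pos=[0.5, -1.5], speaker_positions=speakers)
print(np.round(delays * 1000, 2), np.round(gains, 2))
```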

    SMART-I²: A Spatial Multi-users Audio-visual Real Time Interactive Interface

    The SMART-I² aims at creating a precise and coherent virtual environment by providing users with accurate audio and visual localization cues. Wave Field Synthesis for audio rendering and tracked stereoscopy for visual rendering are each known to permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large Multi-Actuator Panel used both as a loudspeaker array and as a projection screen, considerably reducing audio-visual incoherencies. The system performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.

    SMART-I²: Spatial Multi-users Audio-visual Real Time Interactive Interface, a broadcast application context

    SMART-I² is a high-quality 3D audio-visual interactive rendering system in which the screen is also used as a multichannel loudspeaker. The spatial audio rendering is based on Wave Field Synthesis, an approach that creates a coherent spatial perception of sound over a large listening area. The azimuth localization accuracy of the system has been verified by a perceptual experiment. Contrary to conventional systems, SMART-I² achieves a high degree of 3D audio-visual integration with almost no compromise on either the audio or the graphics rendering quality. Such a system can provide benefits to a wide range of applications. Index Terms: audio-visual integration.

    Audio, visual, and audio-visual egocentric distance perception in virtual environments.

    Previous studies have shown that in real environments, distances are visually estimated correctly, whereas in visual (V) virtual environments (VEs) they are systematically underestimated. In audio (A) real and virtual environments, near distances (2 m) are underestimated. However, little is known about combined A and V interactions in egocentric distance perception in VEs. In this paper we present a study of A, V, and AV egocentric distance perception in VEs. AV rendering is provided via the SMART-I² platform using tracked passive visual stereoscopy and acoustical wave field synthesis (WFS). Distances are estimated using triangulated blind walking under A, V, and AV conditions. Distance compressions similar to those found in previous studies are observed under each rendering condition. The audio and visual modalities appear to be of similar precision for distance estimation in virtual environments, which casts doubt on the commonly accepted visual capture theory of distance perception.
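
    To make the triangulated-blind-walking measure concrete, here is a hypothetical reconstruction of the reported distance from a single trial; the coordinate convention, function name, and numbers are all invented for illustration.

```python
import numpy as np

def triangulated_distance(walk_vector, facing_angle_deg):
    """Hypothetical reconstruction of perceived egocentric distance from one
    triangulated-blind-walking trial.

    The subject starts at the origin facing the target along the +y axis,
    walks blindfolded by `walk_vector` (x, y in metres), then turns so that
    the new heading makes `facing_angle_deg` with the +y axis (positive
    towards +x). The perceived target is the intersection of the original
    line of sight (the y axis) with the post-walk facing direction.
    """
    px, py = walk_vector
    theta = np.radians(facing_angle_deg)
    heading = np.array([np.sin(theta), np.cos(theta)])  # unit facing vector
    if abs(heading[0]) < 1e-9:
        raise ValueError("Facing direction is parallel to the line of sight.")
    # Solve (px, py) + t * heading = (0, y_target): t = -px / heading_x
    t = -px / heading[0]
    return py + t * heading[1]

# Example: walk 2 m forward and 1 m to the right, then turn 30 degrees back
# towards the original line of sight -> implied distance of about 3.73 m.
print(round(triangulated_distance(walk_vector=(1.0, 2.0), facing_angle_deg=-30.0), 2))
```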

    Prediction of harmonic distortion generated by electro-dynamic loudspeakers using cascade of Hammerstein models

    Audio rendering systems are always slightly nonlinear. Their non-linearities must be modeled and measured for quality evaluation and control purposes. Cascades of Hammerstein models describe a large class of non-linearities. To identify the elements of such a model, a method based on a phase property of exponential sine sweeps is proposed. A complete model of non-linearities is identified from a single measurement. A cascade of Hammerstein models corresponding to an electro-dynamic loudspeaker is identified this way. Harmonic distortion is afterward predicted using the identified models. Comparisons with classical measurement techniques show that harmonic distortion is accurately predicted by the identified models over the entire audio frequency range for any desired input amplitude.
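
    The phase property referred to above is a standard property of exponential sine sweeps: after deconvolution with the inverse sweep, the distortion component of order k shows up as an impulse response advanced by a fixed, order-dependent time, which is why a single measurement separates all orders. In notation chosen here (not taken from the paper), for a sweep of duration T from f1 to f2:

```latex
% Exponential sine sweep and the time advance of the k-th order component
% (standard ESS property; symbols chosen here for illustration)
\[
  x(t) = \sin\!\left( \frac{2\pi f_1 T}{\ln(f_2/f_1)}
         \left( e^{\,t\ln(f_2/f_1)/T} - 1 \right) \right),
  \qquad
  \Delta t_k = \frac{T \ln k}{\ln(f_2/f_1)} .
\]
```

    After convolution of the measured response with the inverse sweep, the impulse response associated with order k appears a time Δt_k earlier than the linear impulse response, so the contributions of the different orders can be windowed apart in time.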

    Identification of cascade of Hammerstein models for the description of non-linearities in vibrating devices

    In a number of vibration applications, the systems under study are slightly nonlinear. It is thus of great importance to have a way to model and measure these nonlinearities in the frequency range of use. Cascades of Hammerstein models conveniently describe a large class of nonlinearities. A simple method based on a phase property of exponential sine sweeps is proposed to identify the structural elements of such a model from only one measured response of the system. Mathematical foundations and practical implementation of the method are discussed. The method is afterwards validated on simulated and real systems. Vibrating devices such as acoustical transducers are well approximated by cascades of Hammerstein models. The harmonic distortion generated by those transducers can be predicted by the model over the entire audio frequency range for any desired input amplitude. Agreement with more time-consuming classical distortion measurement methods was found to be good.
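
    As a way to make the single-measurement idea concrete, the sketch below simulates the procedure on an invented, memoryless polynomial "transducer" (a degenerate Hammerstein cascade with trivial filters); the sweep parameters and the system are placeholders, not values from the paper.

```python
import numpy as np

fs = 48_000                       # sampling rate (Hz), arbitrary choice
f1, f2, T = 20.0, 20_000.0, 5.0   # sweep from f1 to f2 over T seconds
t = np.arange(int(T * fs)) / fs
L = T / np.log(f2 / f1)

# Exponential sine sweep and its amplitude-compensated inverse filter.
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
inverse = sweep[::-1] / np.exp(t / L)

# Invented stand-in for a weakly nonlinear transducer: a memoryless
# polynomial, i.e. a Hammerstein cascade whose linear filters are trivial.
response = 0.9 * sweep + 0.1 * sweep**2 + 0.05 * sweep**3

# Deconvolution by FFT-based convolution with the inverse sweep: the impulse
# response of order k appears dt_k = T*ln(k)/ln(f2/f1) before the linear one,
# so the orders can be windowed apart along the time axis.
n = 2 * len(t)
h = np.fft.irfft(np.fft.rfft(response, n) * np.fft.rfft(inverse, n), n)
linear_idx = len(t) - 1
for k in (1, 2, 3):
    idx = linear_idx - int(round(L * np.log(k) * fs))
    print(f"order {k}: impulse response expected near sample {idx}")
```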

    From vibration to perception: using Large Multi-Actuator Panels (LaMAPs) to create coherent audio-visual environments

    Virtual reality aims at providing users with audio-visual worlds in which they will behave and learn as if they were in the real world. In this context, specific acoustic transducers are needed to fulfill simultaneous spatial requirements on visual and audio rendering in order to make them coherent. Large multi-actuator panels (LaMAPs) allow the combined construction of a projection screen and a loudspeaker array, and thus allow the coherent creation of an audio and visual virtual world. They constitute an attractive alternative to the electro-dynamic loudspeakers and multi-actuator panels used previously. In this paper, the vibroacoustic behavior of LaMAPs is studied and it is shown that LaMAPs can be used as secondary sources for wave field synthesis (WFS). The auditory virtual environment created by LaMAPs driven by WFS is then perceptually assessed in an experiment where users estimate the egocentric distance of an audio virtual object by means of triangulation. Vibroacoustic and perceptual results indicate that LaMAPs driven by WFS can be confidently used for the creation of auditory virtual worlds.

    Assessment of the impact of spatial audiovisual coherence on source unmasking

    The present study aims at evaluating the contribution of spatial audiovisual coherence to sound source unmasking in live music mixing. Sound engineers working with WFS technologies for live sound mixing have reported that their mixing methods have radically changed. With conventional mixing methods, the audio spectrum is balanced in order to make each instrument intelligible within the stereo mix. In contrast, when using WFS technologies, source intelligibility can be achieved through spatial audiovisual coherence and/or sound spatialization, without spectral modifications. The respective effects of spatial audiovisual coherence and sound spatialization should be perceptually evaluated. As a first step, the ability of naive and expert subjects to identify a spatialized mix was evaluated in a discrimination task. For this purpose, live performances (rock, jazz, and classical) were played back to subjects with and without stereoscopic video display, using VBAP or WFS audio rendering. Two sound engineers performed the audio mixing for three pieces of music and for both audio technologies in the same room where the tests were carried out.
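
    For context, VBAP (vector base amplitude panning) places a virtual source between loudspeakers purely by amplitude weighting, whereas WFS synthesizes the source's wave front over the listening area. A minimal two-loudspeaker VBAP gain computation, with angles invented for the example, might look like:

```python
import numpy as np

def vbap_2d(pan_angle_deg, spk1_deg, spk2_deg):
    """Two-speaker (pairwise) VBAP gains for a source direction lying between
    two loudspeakers; all angles in degrees in the horizontal plane.
    Returns power-normalised gains (g1, g2)."""
    p = np.array([np.cos(np.radians(pan_angle_deg)), np.sin(np.radians(pan_angle_deg))])
    L = np.array([
        [np.cos(np.radians(spk1_deg)), np.sin(np.radians(spk1_deg))],
        [np.cos(np.radians(spk2_deg)), np.sin(np.radians(spk2_deg))],
    ])
    g = p @ np.linalg.inv(L)          # solve g L = p (Pulkki's formulation)
    return g / np.linalg.norm(g)      # constant-power normalisation

# Example: source at +10 degrees, loudspeakers at -30 and +30 degrees;
# the +30 degree speaker receives the larger gain.
print(np.round(vbap_2d(10.0, -30.0, 30.0), 3))
```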

    Impact of spatial audiovisual coherence on source unmasking

    The influence of spatial audiovisual coherence is evaluated in the context of a video recording of live music. In this context, audio engineers currently balance the audio spectrum to unmask each instrument and make it intelligible within the stereo mix. In contrast, sound engineers using spatial audio technologies have reported that sound source equalization is unnecessary in live music mixing when the sound sources are played back at the same location as the physical instruments. The effects of spatial audiovisual coherence and sound spatialization have been assessed: expert subjects were asked to compare two mixes in audio-only and in audiovisual mode. For this purpose, music concerts were projected visually and rendered acoustically using WFS. Three sound engineers performed the audio mixing for all pieces of music in the same room where the tests were carried out.

    Parallel Hammerstein Models Identification using Sine Sweeps and the Welch Method

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of a system cannot be ignored and has to be modeled and estimated. Among the various classes of nonlinear models in the literature, Parallel Hammerstein Models (PHM) are interesting because they are both easy to understand and easy to estimate using exponential-sine-sweep (ESS) based methods. However, the classical ESS-based estimation procedure for PHM relies on a very specific input signal (the ESS), which limits its use in practice. A method based on the Welch method is proposed here that allows PHM estimation with arbitrary sine sweeps (ASS), a much broader class of input signals than ESS. Results show that for various ASS, the proposed method provides results in excellent agreement with those obtained with the classical ESS method.
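
    The Welch-based ingredient can be sketched with the standard H1 frequency-response estimator (averaged cross- and auto-spectra). The sketch below only illustrates that building block on an invented weakly nonlinear system driven by a plain linear chirp; it is not the paper's full PHM identification procedure.

```python
import numpy as np
from scipy.signal import chirp, csd, welch

fs = 48_000
t = np.arange(0, 4.0, 1.0 / fs)

# An "arbitrary" sine sweep (here a plain linear chirp) as the input signal.
x = chirp(t, f0=20.0, f1=20_000.0, t1=t[-1], method="linear")

# Invented stand-in for a weakly nonlinear system (a degenerate PHM).
y = 0.9 * x + 0.05 * x**3

# H1 frequency-response estimate via Welch/CSD averaging: H(f) = Pxy / Pxx.
f, Pxy = csd(x, y, fs=fs, nperseg=4096)
_, Pxx = welch(x, fs=fs, nperseg=4096)
H = Pxy / Pxx

print(f[100], np.abs(H[100]))   # magnitude of the estimated response at one bin
```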